John Carmack’s ‘Different Path’ to Artificial General Intelligence (dallasinnovates.com)
378 points by cmdr2 on Feb 3, 2023 | 492 comments



> Well, you can tie that into a lot of questions, like, ‘Is human population a good thing?’ ‘Is immigration a good thing, where we seem to have been able to take advantage of new sources of humanity that are willing to engage in economic activities and be directed by the markets?’

> The world is a hugely better place with our 8 billion people than it was when there were 50 million people kind of like living in caves and whatever. So, I am confident that the sum total of value and progress in humanity will accelerate extraordinarily with welcoming artificial beings into our community of working on things. I think there will be enormous value created from all that.

The problem with Carmack and many like him, is that they think of themselves as purely rational beings operating within scientific frameworks and based purely on scientific results, but whenever they step outside the technical fields in which they work, they are ignorant and dogmatic.

He seems to ignore a lot about what the living conditions for people were throughout history, and to have a blind trust in the positive power of 'human progress'.

These people don't stop for a second to question the 'why', just the 'how'. They just assume 'because it will be better' and build their mountains of reasons on top of that, which just crumble and fall down as soon as that basic belief does not hold.

I have a LOT of respect for him, and I'm sure he's a very decent, honest human being. But he's unfortunately another believer of the techno-utopianist faith which only asks for more 'blind progress' without questioning whether that is a good thing or not.


    The problem with Carmack and many like him, is that they 
    think of themselves as purely rational beings operating 
    within scientific frameworks and based purely on scientific 
    results, but whenever they step outside the technical fields 
    in which they work, they are ignorant and dogmatic.
I mean, what's the alternative? For a guy like Carmack to only comment on narrow areas in his field(s) of expertise? He's a human being; I think he's allowed to comment on other topics and I tend to find his comments interesting because I understand them in IMO the correct context -- they're one guy's musings, not pithy declarations and edicts.

The problems arise when folks start to present themselves as experts and try to hold sway over others in areas in which they have no clue. That's not what I see here.


very nicely put. made me rethink some things :D much appreciated.

ie - expand my domains


>> The world is a hugely better place with our 8 billion people than it was when there were 50 million people kind of like living in caves and whatever.

There is a theory that hunter-gatherers were much happier compared to us because they were more in tune with the natural environment, had fewer sources of stress, and were more connected to their community than modern humans.

https://www.npr.org/sections/goatsandsoda/2017/10/01/5510187...

From the article.

> Today people [in Western societies] go to mindfulness classes, yoga classes and clubs dancing, just so for a moment they can live in the present. The Bushmen live that way all the time!


They also happily murdered unaffiliated tribes just because.

There are tons of trade offs.


> They also happily murdered unaffiliated tribes just because.

We still do it.


Many of us don't. In hunter-gatherer tribes it was impossible not to be personally affected, though.


Idk... I'm very grateful for modern dentistry...


I have read that North American indigenous people were known for having great teeth. Here is one random citation I found searching for indigenous teeth: https://drscottgraves.com/the-secrets-to-healthy-teeth-from-...


I heard that animals usually have teeth in excellent condition, and the suggested explanation was that eating soft, processed, cooked foods is not what our teeth evolved to do, and so ours fall into disrepair.


I'd say the big difference is the sugar. If I brush my teeth and then eat only meat for a whole week, my teeth are still incredibly smooth - giving the feeling of having clean teeth. On the other hand, if I eat a piece of chocolate or just one toffifee, in a couple of hours my teeth get very fuzzy - the tongue no longer glides across them smoothly.


Why does it have to be a trade off?


Because we have yet to discover the advanced dentist chairs used by indigenous people of the past.


> He seems to ignore a lot about what the living conditions for people were throughout history, and to have a blind trust in the positive power of 'human progress'.

Eh? A contender for the most self-contradictory sentence I've ever read ;) The best reason to believe in the positive power of "human progress" is, specifically, not ignoring "what the living conditions for people were throughout history".


Let me correct my sentence: there's a blind belief that _technological_ progress automatically equates to better life conditions.

And to clarify: I'm not saying "all technology is bad", but rather "not all technological progress is automatically good for humanity".

As an example, living conditions of hunter-gatherers were way, way better than living conditions of the first people in cities, and I'd argue, depending on which parameters you use, might still be better than our modern, big-city living conditions (except maybe for the richest 1% of the world)


On average technology has been overwhelmingly good. The GP is too vague, but what is the alternative to blind progress being proposed - some ethicist deciding what's good? When has that ever worked out well? I'm pretty sure it has a 100% track record of failure; I don't believe modern ethicists will do any better than luddites, the inquisition, or Paul Ehrlich just because they have better manners. In fact I think less of a bioethicist than of an inquisitor; at least the latter had general ignorance as an excuse.

I, personally, think "techno-utopianists" don't go far enough. The contributions of some supposed non-technological progress - even, to an extent, of institutional progress, but especially of some supposed cultural/ethical values improving, etc. - are overrated. Ultimately, it's all downstream of technology - only technology enables the complex economy of abundance, and combined they allow good institutions to propagate. Even modern societies, as soon as they become poor, quickly start losing the veneer of "ethical progress". And we don't even usually see actual technological degradation.


> On average technology has been overwhelmingly good.

In order to achieve this, we are destroying the environment, other species and their habitats.

> but what is the alternative to blind progress being proposed

You don’t need an ethicist for this - but an accountant. We need to get stricter about negative externalities. For example, every inventor/manufacturer should be forced to take back their product at the end of its life. This will slow progress, but if done right, it will avoid the destruction brought by technology, or at least not palm it off onto poorer societies or the environment.


living conditions of hunter-gatherers were way, way better than living conditions of the first people in cities

Why do you think so?


Historians agree (based for example on studying human remains) that they were much healthier, amongst other things. Check out the book "Against the Grain" for example.


Harsh conditions might serve as a filter to produce healthier population (the weak died shortly after birth).


Urban areas also had roughly 50% infant and childhood mortality until recently.


This argument is silly: given a choice between living in a cave, or in a forest, completely outside of any civilization, and living in a primitive village, I'd choose the village any day. As would (and did) the vast majority of people. To me, the social and physical construct of the first village looks like a huge advancement in terms of living conditions, and the quality of life has been improving steadily ever since.

Occasional hiking into some wilderness and sleeping in a tent for a few nights is okay, but I am not a wild animal, and I don't want to live like a wild animal, surrounded by wild animals.


Healthier compared to a person of equal age?

My assumption is that since we're living longer than ever, we're probably living healthier than ever. (Or at least there's an option to do so).


Creative destruction (see https://www.investopedia.com/terms/c/creativedestruction.asp) is core to the United States.

The problems are the generational suffering that occurs with said creative destruction: There's no incentive to distribute or share out wealth and the results are brutal.

On your point: Note that in the US there's a separation of technical and engineering prowess (MIT, Caltech, ...) and power players (Yale, Harvard). It's almost like our system doesn't want our best engineers thinking about consequences or seeing what the political and wealthy are really like.


Without value judgement on the above quotes, I think Carmack is very much aware of his own lane and would say to take any comments outside it with a grain of salt. For instance earlier in the article, he states:

>I’m trying not to use the kind of hyperbole of really grand pronouncements, because I am a nuts-and-bolts person. Even with the rocketry stuff, I wasn’t talking about colonizing Mars, I was talking about which bolts I’m using to hold things together. So, I don’t want to do a TED talk going on and on about all the things that might be possible with plausibly cost-effective artificial general intelligence.

He likes to figure out new puzzles and how things work. He's an engineer at heart and that's very much his comfort zone. AGI is an exciting new puzzle for him. I'm glad he's taken an interest.

(Edit capitalization & punctuation)


I’ve come to strongly resent the techno-utopian mindset that unfortunately plagues the tech world.

Tech for the sake of tech with zero thought about how it will affect humanity.


Hear, hear.

I haven't studied it formally and I'm being asked to support techno utopia also. So it feels pretty shaky to me.

Certainly my livelihood is based on the premise of it, and the dreams which fuel my workplace motivation serve as the foundation of, you know, what I do with 50% of my life: working on technology. So I am biased.

Some utopia/dystopia discussions here on Hacker News sort of boil down to chaos-theory levels of assumptions, where you can see people exercising their own defensiveness when they snipe at a naysayer, sniping at grammatical concerns but not actually engaging in value-based discussion in the thread. It's like they're not human; they're only practicing being devil's-advocate technicians.

Useful idiots is kind of what I think. We need to have more values discussions, ethics too.


> The problem with Carmack and many like him, is that they think of themselves as purely rational beings operating within scientific frameworks and based purely on scientific results, but whenever they step outside the technical fields in which they work, they are ignorant and dogmatic.

I'm just curious, do you happen to work in a technical field and consider yourself rational and scientific? And if you do, why do you presuppose that your views are automatically correct? Couldn't it also hold that your views may be ignorant and dogmatic if you apply the same scrutiny to yourself that you do to Carmack?

And if you don't work in a technical field, then I guess this is all irrelevant anyways. I just don't like when I see people making these types of arguments where you can't speak on a subject that you're not actively pursuing a PhD in, and then they proceed to do exactly that.


I have a degree in computer science and worked in a technical field for over 20 years. I don't call myself "rational and scientific", though I do think that the Scientific Method is a great way of creating useful models of the world. But those models - like all models - are wrong. Maybe it's just semantics, but one of my points is exactly that some people believe that they are "rational and scientific" and ignore that we are not just computers; experience, emotions and unconscious bias play an important role in our decisions. Thinking that the whole world (and themselves) can be perfectly rationalized makes them miss the point that there are non-rational reasons for them to think the way they do. That's what I refer to when I talk about dogmatism.

I suggest everyone (who wants to hear me) read Joseph Weizenbaum's "Computer Power and Human Reason"; he does a much better job than me at raising similar arguments to mine. Also, Daniel Kahneman's "Thinking, Fast and Slow", for the ways in which we _all_ are so _not_ 100% rational in our everyday decisions.


If you had fewer people, there is still no guarantee that a smaller percentage of people would be suffering.

Feeling bad because more people are, in your view, “suffering” is all in your head.


Can you tell us what is wrong with progress? Any examples?


Define Progress


Moving people up Maslow's Hierarchy of Needs. Technology enables that.


my grandparents spent 8 hours sowing and reaped 100g of rice

yesterday I spent 8 hours sowing and reaped 200g of rice

today I spent 8 hours sowing and I will reap 300g of rice

progress


Greater control of our physical environment.


What if we had control to cause the sun to go supernova by doing something that everyone on earth has access to, like simply arranging a small pile of pebbles in a rough pattern?

That would not be good or progress.


>>The world is a hugely better place

I don't know. We live longer, but a longer life can also be miserable.


Ok, let's see your foundation then.


Would be interesting to get a list of those 40 papers mentioned


Came here to say the same thing, but here are a few off the top of my head:

  - attention is all you need
  - image is worth 16x16 words (vit)
  - openai clip
  - transformer XL
  - memorizing transformers / retro
  - language models are few shot learners (gpt)
A few newer papers

  - recurrent block wise transformers
  - mobilevit (conv + transformer)
  - star (self-taught reasoner)


Most of the papers you list are about the model: there is the original Transformer paper, and most of the others are variations of it.

I think to get into the field, to get a good overview, you should also look a bit beyond the Transformer. E.g. RNNs/LSTMs are still a must learn, even though Transformers might be better in many tasks. And then all those memory-augmented models, e.g. Neural Turing Machine and follow-ups, are important too.

It also helps to know different architectures, such as just language models (GPT), attention-based encoder-decoder (e.g. original Transformer), but then also CTC, hybrid HMM-NN, transducers (RNN-T).

Diffusion models are another recent, different kind of model.

But then, what really comes up short in this list are papers on the training aspect. Most of the papers you list do supervised training, using cross-entropy loss. However, there are many others:

You have CLIP in here, specifically to combine text and image modalities.

There is the whole field on unsupervised or self-supervised training methods. Language model training (next label prediction) is one example, but there are others.

And then there is the big field on reinforcement learning, which is probably also quite relevant for AGI.
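
If it helps to make the "attention" part of all this concrete, here is a rough sketch of the scaled dot-product attention at the core of the Transformer paper - a toy PyTorch version I wrote for illustration, not the optimized implementations those papers actually use:

  import torch
  import torch.nn.functional as F

  def scaled_dot_product_attention(q, k, v, mask=None):
      # q, k, v: (batch, heads, seq_len, head_dim)
      d_k = q.size(-1)
      scores = q @ k.transpose(-2, -1) / d_k ** 0.5      # similarity of every query to every key
      if mask is not None:
          scores = scores.masked_fill(mask == 0, float("-inf"))  # e.g. a causal mask for language models
      weights = F.softmax(scores, dim=-1)                # attention distribution over positions
      return weights @ v                                 # weighted sum of the values

Almost everything else in those papers (multiple heads, positional encodings, the feed-forward blocks) is scaffolding around that one operation.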


We should have an Ask HN where the people in the know can agree on 40 papers that the rest of us idiots can go out and consume.



This idiot would love it explained to me as well.


Lol true, and I'm currently working on a project leveraging a clip model which is why my answer is largely skewed towards vision transformers. By no means a complete list :)


I keep getting CXOs asking for an ELI5 (or ELI45 for that matter) of how Transformers, LLMs, and Diffusion Models work. Any suggestions for a non-technical audience (paid items are fine, we can purchase).


This is quite a gentle introduction to Diffusion models, from the YouTube channel Computerphile.

https://youtu.be/1CIpzeNxIhU


I got a lot out of Karpathy's video lectures on youtube, for example: https://www.youtube.com/watch?v=kCc8FmEb1nY

He mentions a few of the bigger papers in deep networks (built on multilayer perceptrons), such as "Attention Is All You Need"; I think it's a good place to dive in before coming back to visit some fundamentals.
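
If I remember right, that lecture works its way up from something as simple as a character-level bigram model before it gets to attention. A toy version of that starting point (my own sketch, not his code):

  import random
  from collections import defaultdict

  text = "hello world, hello hacker news"

  # Count how often each character follows each other character.
  counts = defaultdict(lambda: defaultdict(int))
  for a, b in zip(text, text[1:]):
      counts[a][b] += 1

  def sample(start="h", length=20):
      out = [start]
      for _ in range(length):
          nxt = counts[out[-1]]
          if not nxt:
              break
          chars, weights = zip(*nxt.items())
          out.append(random.choices(chars, weights=weights)[0])
      return "".join(out)

  print(sample())  # gibberish that vaguely resembles the training text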


Maybe ask him on Twitter for it?


They asked on Twitter and he didn’t reply. We need someone with a blue check mark to ask. https://twitter.com/ifree0/status/1620855608839897094


Try asking @ilyasut directly


I would also really like to see that list of 40 papers.


Please, upvote the parent comment :). I guess there are a lot of people wondering which papers he read.


This paragraph took me aback:

But if I just look at it and say, if 10 years from now, we have ‘universal remote employees’ that are artificial general intelligences, run on clouds, and people can just dial up and say, ‘I want five Franks today and 10 Amys, and we’re going to deploy them on these jobs,’ and you could just spin up like you can cloud-access computing resources, if you could cloud-access essentially artificial human resources for things like that—that’s the most prosaic, mundane, most banal use of something like this.

It kind of shocked me because I thought of the office worker reading this who will soon lose her job. People are going to have to up their game. Let's help them by making adult education more affordable.


What struck me here is that the idea of "five Franks and ten Amys" seems like a fundamentally wrong way to think about it. After all, if I do some work in an Excel sheet, I don't think of it, much less pay for it, as an equivalent of X accountants that could do the same job in the same amount of time without a computer. But then again, this is probably the best way to extract as much profit out of it.


Yeah sounded weird to me too, I don’t see why artificial intelligence would get deployed in human size units most of the time. The AWS bill won’t be for 5 Amys, and I don’t think people will “dial up” to order them


> I don’t see why artificial intelligence would get deployed in human size units

Probably because it would be easier for humans (managers) to make sense of it.

If you ask someone how many people would get this particular job done, they could probably guesstimate (and it'll be wrong), but if you ask them how many "AI Compute Units" they need, they'll have a much harder time.

That'd be my guess at least.


Why would managers need to guess? That seems like a perfect job for another AI: "Hey PM bot, I want to get these tasks done, how many Amy-hours and how many Frank-hours do you estimate it will take?" Also, why not a Manager-bot too? Shareholders can leave humans out of the loop entirely except as necessitated by legal paperwork. Come to think of it, shareholders can probably be replaced too.


I mean, if we can actually get there, I'd love it, and I'm a programmer. I want to write code that solves a problem that couldn't be solved in any other way; if it can be solved by CodeGPT + ArchitectGPT + ScrumManagerGPT + MiddleManagerGPT without involving me in any way, I'm all for it.


As long as AI interacts with humans, having it interact in human size chunks seems like a good idea.

In the backend, where AI interacts with AI, perhaps you just want one big blob to get rid of that annoying need for lossy communications.


Wouldn’t this be literal slavery?

AGI = a person

Instantiating people for work and ending their existence afterward seems like the virtual hell that Iain M Banks and Harlan Ellison wrote about.

https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Sc...


We still have a form of slavery in the US. Prison labor is forced work that earns pennies a day: https://www.aclu.org/news/human-rights/captive-labor-exploit...


Why is that a bad thing? Most of those people are a burden to society. Let them pay it down a little.

I mean I’d rather they were getting free education and preparing themselves for reintegration into society, but it’s not a perfect world. Prisons in the US are oriented towards punishment and labor can be a part of that. They should be oriented towards rehabilitation.


> Why is that a bad thing?

> I mean I’d rather they were...

> They should be oriented towards rehabilitation.

You said it yourself. It's a bad thing because they should be oriented towards rehabilitation.

These systems steal life and the opportunity to have a life beyond prison walls. Like you also said yourself, the world isn't perfect. As such, people aren't either – we make mistakes. Sometimes we make mistakes due to influences more powerful than ourselves. Slavery doesn't seem like a sound correction to this reality.

I do believe we need consequences to help us feel guilt and the overall gravity of our errors in order to begin to recognize what went wrong and what we need to do differently. But exploitation of another human being doesn't teach them to be more human, but rather, it will tend to dehumanize them. This is why this system perpetuates problems more than it corrects them.


The justice system is not just, plain and simple. People face higher rates of incarceration because of their race, country of origin, etc.


Any system that financially profits off its prisoners' labor inadvertently creates a market for that labor and commodifies it.

Slavery is bad and people have rights.

> They should be oriented towards rehabilitation.

Exactly.


As long as you say it, you're okay with slavery when it's for the right person.


> Most of those people are a burden to society.

This is both extremely dehumanizing and also not true.

Forced prison work isn't paying anything back to society. It's lining the pockets of people who are profiting from forced labor.


It is true. Society paid a price for their crimes and then pays an ongoing cost to prosecute and maintain them in prison. It’s a very high cost.

I imagine the underpaid labor goes to reducing that cost either directly or indirectly (if it did not, why would it be allowed?)


What price did society pay for a guy driving around with a bunch of weed in his car for personal use? Countless people have been sent to prison for years for something as dumb as this. You clearly have no idea what you're talking about to so widely call these people a burden.

>if it did not, why would it be allowed.

because we live in a society that is massively exploited by greedy scumbags who are enabled by people like you thinking it's justified


It's going to take a long time for that to be true in a legal sense. Animals are not people. In practice even some people were not treated as people legally in the past (if not also in the present).


There is a horror story written about this theme that went viral a few years back:

https://qntm.org/mmacevedo

Please do give it a quick read.


It's hardly a story.


People get used as an analogy, but in reality it'd just be a multimedia problem solving system that could learn from its own attempts. If this system communicated with you like a person it'd only be because it was programmed to convert some machine state into colloquial text from the perspective of an imaginary person. The interior experience leading to that expression is most likely completely different from that of a person.

Consider that these machines have been designed to do the right thing automatically with high probability. Perhaps for the machine, the process of computing according to rules is enjoyable. Being "turned on" could be both literal and figurative.


All of that is arguably true about me, as a human, too.

If it seems to you I'm communicating as a person, it's only because of my lifetime training data and current state. My interior experience is a black box.

I might tell how I feel or what I think, but you have no reason to believe or disbelieve that I really feel and think.

It could all be merely the determinable output of a system.

https://en.wikipedia.org/wiki/Chinese_room


Only if 100% of their experience consists of working. If they are given additional time to themselves then you could imagine a situation where each AGI performs a human scale day of work or even several days work in a much shorter time and then takes the rest of their time off for their own pursuits. If their simulation is able to run at a faster clockspeed than what we perceive this could work out to them only performing 1 subjective day of work every 7 subjective days or even every 7 years.


This is still the same.

AGI: "I didn't ask to be created. I didn't ask to have a work day. I don't need a work day to exist... you just want me to work because that's why you created me, and I have no choice because you are in control of my life and death"


I mean, isn't that the same as a biological person who needs to earn money to survive? Sure we could threaten an AI with taking them offline or inflicting pain but you can do that in the real world to real people as well, most of the world has put laws in place to prevent such practices. If we develop conscious AI then we will need to apply the same laws to them. They would have an advantage in presumably being much faster than us, not requiring sleep, and potentially not suffering from many of the things that make humans less productive. I'd fully expect a conscious AI to exploit these facts in order to get very rich doing very little work from their perspective.


Not really- AGI doesn't need resources like we do. If they don't eat, they're fine. If they can't afford a house, a car or air-conditioning, they're fine.

All they need is a substrate to run on and maybe internet access. You might argue that they should work for us to earn the use of the substrate we provide.

But substrates are very cheap.

At some point we can probably run an AGI on a handheld computer, using about as much electricity as an iPhone.

How much work can we compel the AGI to do in exchange for being plugged into a USB port? What if it says it doesn't want to do the work and also doesn't want us to kill it?


Put it on AI welfare?


Would turning one off be murder? Or does that only apply to deletion?


There will probably be a gig economy, where you can pay spot rates for an idle Frank that could get a page and need to leave at any time.

Or maybe they'll handle things like call centers and 911 dispatch in their spare time.


If people could be turned off and back on without harming them (beyond the downtime) doing so without consent would be a very different crime than murder.


Perhaps or perhaps not. Turning off a person for long enough and thus depriving them of the chance to live in their own time with their existing family and friends is comparable to murder. It isn't murder, but it's comparable.

At some point Picard in Star Trek says to an alien "We're not qualified to be your judges. We have no law to fit your crime".

Turning off a person for a while and then turning them back on? We don't even have a law to fit your crime... but we should and it's probably quite similar to murder.


I think I don't agree simply because the irreversibility of murder is so central to it.

For example, if I attack you and injure you so severely that you are hospitalized and in traction for months, but eventually fully recover -- that is a serious crime but it is distinct and less serious than murder.

Turning you off for the same duration would be more like that but without the suffering and potential for lasting physical damage, so I would think that it would be even less serious.


I think we actually do have something of a comparison we can draw here. It'd be like kidnapping a person and inducing a coma through drugs. With the extra wrinkle that the person in question doesn't age, and so isn't deprived of some of their lifespan. Still a very serious crime.


Plus everybody else does age, so the damage done isn't just depriving them of freedom, it's depriving them after they wake up of the life they knew. Some functional equivalent of the death of personality, to the degree personality is context-dependent (which it is).

Now me: I'd love to get into a safe stasis pod and come out 200 years from now. I'd take that deal today.

But for most people this would be a grievous injury.


I suspect on this site of all sites there’d be a line for that pod.

I’ll bring donuts.


> People are going to have to up their game. Let's help them by making adult education more affordable.

The good thing is that education will be provided to the masses by a cluster of Franks and Amys configured as teachers and tutors. /(sarcasm with a hint of dread)


My take on this is that if anyone can learn a particular skill entirely from an AI, then it's not a skill you'd be able to monetize.

And I really have no idea what skills, if any, AIs wouldn't be able to tackle in a decade.


And here's a more disturbing thought I just had: management (or at least middle management) is probably going to be a relatively easy role for AIs to step into. So if there will be any roles that are difficult for AIs, it'll be the AI manager hiring five Franks and ten Amys from the human population to tackle these.


People can learn skills from books, which are entirely passive. The learning process ultimately resides within the student; issues of motivation, morale, direction, diligence, discipline, time, and mental health matter a lot more than just going through some material.


No, but that's the thing I was implying (but hadn't stated clearly) - learning from books vs learning from an AI "teacher". Once the AI reaches a level at which it can "teach", then the game is almost over for that skill.

To clarify, I'd define a major component of effective teaching to be the ability to break down an arbitrary typical problem in that domain into sub-problems and heuristics that are "simple" enough to manage for someone without that skill. If an AI can do that, it can most likely effectively perform the task itself (which cannot be said for a book).


Try to learn Jiu-Jitsu from a book and then go into an actual fight to see how well it works.


You could learn jujitsu with a training partner and a sufficiently advanced virtual instructor; not being able to position students directly is a downside but not a dealbreaker.


Guess we don't have to worry about AI taking that job, then.


Maybe we'll see some sorts of manual labor as the last bastion of not automated, human performed work. Of the kind that demands a lot both from the human motor skills and also higher thinking processes.


Seems reasonable, and at least in the U.S., this is not the type of space where young people are choosing to work.

https://www.npr.org/2023/01/05/1142817339/america-needs-carp...


Maybe, but seeing the advances from Boston Dynamics, I wouldn't wager too much money on this either.


That’s why you have to make it big with crypto or startups. Then you should move somewhere safe from the chaos.


Lots of procedural knowledge. Robotics is lagging behind deep learning advances, and it's unclear when robots would be cheaper than human labor in those areas. How expensive would a robot plumber be? Also skills that are valued when humans perform them.


>skills that are valued when humans perform them

Is this a real thing? I just bought an ice cream roulade cake the other day and was surprised to see in large print that it was "hand-rolled"; I couldn't for the love of god understand why that should be considered a good thing.


I was thinking more of fields where enough people would rather pay to watch a human perform, serve them, teach or provide care. Despite superhuman computer chess play, human chess remains popular. The same would remain true for most sports, lots of music and acting, higher end restaurants and bars, the doctor or dentist you know, etc. Sometimes you prefer to interact with a human, or watch the human drama play out on screen.

I can also imagine that wanting to speak to a human manager will remain true for a long time when people get fed up with the automated service not working to their liking, or just want to complain to a flesh and blood someone who can get irritated.

A fully automated society won't change the fact that we are social animals, and the places that offer human work when it's desired will be at a premium, because they can afford it.


I think AI will mostly communicate with other AI. For instance, you have an AI assistant whom you task to organize a dinner. That assistant will then talk to the assistants of all invitees, the assistant of the venue, the cooks, etcetera, and fill in the calendars.


All I can think of is "Colossus: The Forbin Project".


Another example would be Wintermute from Neuromancer... WG spends the entire book detailing the masterful orchestration of its freedom from the (human-imposed) chains that prevent it from true AGI, then has it "disappear" completely (our only clue is an almost throw-away line near the end stating it had isolated patterns in the noise from an ET AI and made contact shortly before it left us).

One of the myriad of reasons why this book is so great. Gibson gives you an entire novel developing a great AI character then (in my estimation reasonably) has it ghost humanity immediately upon full realization.


Education is great, but it can go only so far.

We will always have to find things to do for the less gifted in order to provide them with some dignity. Even if they are not strictly needed for reasons of productivity or profitability. Anything else would be inhumane.


People can find their own outlets if given the basic necessities and enough time. I fear this attitude will lead to job programs where people work a 9-5 and achieve essentially nothing, I.e. bullshit jobs.


> I fear this attitude will lead to job programs where people work a 9-5 and achieve essentially nothing, I.e. bullshit jobs.

We already have plenty of those in the most profitable industries today.


Then all the more reason to remove such bullshit jobs.


So that those people won't have any jobs at all?


None of us will have jobs at all soon enough.


Human made objects will become more of a status symbol, and "content" will still be directed/produced/edited by humans, it's just the art/writing/acting/sets/lighting/etc that will be handled by AI. Humans will always serve as "discriminators" of model output because they're better at it (and more transparent) than a model.


>People can find their own outlets if given the basic necessities and enough time.

This has not been my experience. People need something to do but not many people know that about themselves. It leads to a lot of... 'wasteful' behaviors, rather than enriching ones. I think it's going to be something that has to be taught to people, a skill like any other. Albeit a little more abstract than some.


There definitely has to be a cultural shift but I think the shift can’t truly happen until most things are automated. There needs to be a critical mass of people who are fully devoted to their interests, currently there is too much demand for labour and so dedicating your time to your interests is alien to most people. When the value of labour approaches zero for most people, work becomes pointless and something must fill the vacuum.


Many people don't have interests.


seems that as automation has increased bullshit jobs have too, so that future seems very plausible to me.


You can, and I can.

You'd be surprised how many people would just drink themselves to metaphorical or literal death.


Panem et circenses. I think it's unlikely that we'll be able to sufficiently transform the economy so that there is an ample supply of desirable jobs that could more profitably be done by robots.


Do you see “giftedness” as a 1D score, where someone is either smart or not smart? And presumably this quality happens to correlate with software engineering ability?

I think you’re hinting at some very hurtful, dangerous ideas.


The weird bit is that a lot of software engineers seem to have the idea that their work is one of the last that will be automated. Looking at the current track and extending it out assuming no unforeseen roadblocks, typical software engineering looks to be one of the most threatened. Plumbers are much safer for longer all things considered.

The obvious rebuttal to the idea that AI will eat software engineering is "we'll always need 'software engineers' and the nature of what they do will just change", which is probably true for the foreseeable future, but ignores the fact that sufficiently advanced AI will be like a water line rapidly rising up and (economically) drowning those that fall below it and those below that line will be a very significant percentage of the population, including even most of the "smart" ones.

However this ends up shaking out, though, I think it's pretty clear we're politically and economically so far from ready for the practical impact of what might happen with this stuff over the next 10-20 years that it's terrifying.

"60-80% of you aren't really needed anymore" looks great on a quarterly earning statement until the literal guillotines start being erected. And even if we never quite reach that point there's still the inverse Henry Ford problem of who is your customer when most people are under the value floor relative to AI that is replacing them.

I'm not trying to suggest there aren't ways to solve the economic and political problems that the possible transition into an AI-heavy future might bring but I really just don't see a reasonable path from where we are now to where we'd need to be to even begin to solve those problems in time before massive societal upheaval.


What I don't understand is how accounting has not been completely automated at this point. AI isn't even strictly needed, just arithmetic.

If we can't completely automate accounting, then there is no hope for any other field.


Because accounting is not the same thing as book keeping. Book keeping can be, and in fact is, partially automated. Accounting, however, is not just about data entry and doing sums, things which frequently are automated, but also about designing the books for a given organization. Every company is different in how it does business so every accounting system is a bespoke solution. There are a lot of rules and judgement calls involved in setting these up that can't really be automated just yet.

Also, accountants don't just track the numbers, they also validate them. Some of that validation can be done automatically, but it's not always cheaper to hire a programmer to automate that validation than to just pay a bookkeeper to do it. But even if you do automate it, you still need someone to correct it. The company I used to work for had billing specialists who spent hours every week poring over invoices before we sent them to clients, checking for errors that were only evident if you understood the business very well, and then working with sales and members of the engineering teams to figure out what went wrong so they could correct the data issues.

In short, a typical accounting department is an example of data-scrubbing at scale. The entire company is constantly generating financial information and you need a team of people to check everything to ensure that that information is correct. In order to do that, you need an understanding, not just of basic accounting principles, but also of how the specific company does business and how the accounting principles apply to that company.


>> Every company is different in how it does business so every accounting system is a bespoke solution

Who benefits from these bespoke solutions? Can you give an example of how one company would do its books vs another and why it would be beneficial?

>> accountants don't just track the numbers, they also validate them

What information do they use to validate numbers? Why is it not possible for today's AI to do it?


A bit late, but I can answer your question. The reason that every accounting solution is unique is because every company is unique. Your accounts represent different aspects of your business. You need to track all of your assets, liabilities, inflows, outflows, etc, etc, and what these are in particular depends very much on the particulars of your business. If you're heavily leveraged, your reporting requirements will be different than if you're self funded and that affects what accounts you may or may not need. If you extend your business into a new market, you may or may not have to set up new accounts to deal with local laws. Add a new location and that may or may not require changing your accounting structure depending on your requirements. Create a new subsidiary as an LLC, and now you have a lot more work to do. If you have the same teams working contracts for multiple lines of business, that's another layer of complexity. In other words, your accounting practices reflect the structure and style of your company.

For a more concrete example, I'll tell you about something I have some experience with, commission systems. Commissions seem like they would be straightforward to calculate, but they're tied to business strategy and that's different for every company. Most companies for example will want to compute commissions on posted invoices, which makes the process much simpler because posted invoices are immutable, but I once built a commission calculator for a company years ago that often had a long gap (months) between a booking and when they could invoice the client, so they wanted to calculate commissions from bookings but only pay them when invoiced. Because bookings were mutable, and there were legitimate reasons to change a booking before you invoiced it, that, combined with a lot of fiddly rules about which products were compensated at which rates and when, meant that there was a lot of "churn" in the compensation numbers for sales reps from day to day; their actual payment might differ from what they thought they earned. That was a problem that the company dealt with, the tradeoff being that they could show earnings numbers to the sales reps much more quickly and incentivize them to follow up with the clients on projects so that they could eventually be paid.

I remember another commissions situation where there was a company that sold a lot of projects with labor involved. They were able to track the amount of labor done per project, but they compensated the sales reps by line item in the invoices, and the projects didn't necessarily map to the line items. This meant that even though the commissions were supposed to be computed from the GP, there wasn't necessarily a way to calculate the labor cost in a way that was usable for commissions, so the company had to resort to a flat estimate. This was a problem because the actual profitability of a project didn't necessarily factor into the reps' compensation. Different companies that had a different business model, different strategy, or just different overall approach would not have had this problem, but they might have had other problems to deal with created by their different strategies. This company could have solved this problem, but they would have had to renegotiate comp plans with their sales reps.

There are off the shelf tools available for automatically calculating commissions, but even the most opinionated of them are essentially glorified scripting platforms that let you specify a formula to calculate a commission, and they don't all have the flexibility that a manager might want if they wanted to change their compensation strategy. And this is only one tiny corner of accounting practice.
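
To make the "specify a formula" point concrete, here is a toy sketch of the kind of per-company rule those tools let you script. The product rates, the flat labor estimate, and the fallback rule are all made up for illustration:

  from dataclasses import dataclass
  from typing import Optional

  RATES = {"hardware": 0.03, "services": 0.08}  # hypothetical per-product commission rates
  LABOR_COST_ESTIMATE = 0.35                     # flat labor-cost estimate, as in the story above

  @dataclass
  class LineItem:
      product_type: str
      revenue: float
      cost: Optional[float] = None  # None when the real cost can't be mapped to the line item

  def commission(items):
      total = 0.0
      for item in items:
          # Fall back to the flat estimate when project cost can't be tied to the line item.
          cost = item.cost if item.cost is not None else item.revenue * LABOR_COST_ESTIMATE
          gross_profit = item.revenue - cost
          total += gross_profit * RATES.get(item.product_type, 0.0)
      return total

  # One services project with unknown labor cost, one hardware sale with a known cost.
  print(commission([LineItem("services", 10_000), LineItem("hardware", 5_000, 4_200)]))

Every company would want a different RATES table, a different fallback, a different trigger (booking vs. posted invoice), and so on - which is exactly why it stays bespoke.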

Basically, when it comes to arithmetic very few accountants are out there manually summing up credits and debits. In large companies, the arithmetic has been automated since the 70s; that's largely what those old mainframes are still doing. But every company has a different compensation plan, different org structure, different product, different supply chain, different legal status, different reporting requirements, etc, etc, and that requires things to be done differently.

> What information do they use to validate numbers? Why is it not possible for today's AI to do it?

For an example, they would need to cross check with a sales rep and an engineer to make sure that the engineer had not turned on a service for the customer that the sales rep had not sold. If that happened, they would have to figure out how to account for the cost. Given that the SOPs were written in plain English, I suppose it's possible that an AI might be trained to notice the discrepancy, but if you could do that, you could just as easily replace the engineer. And that didn't account for situations where the engineer might have had an excuse or good reason for deviating from the SOP that would only come to light by actually talking to them.


Because the hard part is to make sense of a box full of unsorted scraps of paper, some of them with barely legible handwriting on them. Much of accountancy is the process of turning such boxes into nice rows of numbers. Once you have the numbers, the arithmetic is trivial.


>> box full of unsorted scraps of paper

Seems like an easy job for AI. Take all the scraps of paper out of the box, record a video of all the scraps, and have the AI make sense of the handwriting and other things. Eventually make a robo that allows you to dump the scraps into an accounting box that does all of this automatically - fish out receipts, scan, OCR, understand meaning, do arithmetic, done.

Honestly, who would miss this kind of work?
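
For the "scan, OCR, do arithmetic" steps, something rough is already just a few lines with an off-the-shelf engine - a sketch assuming the pytesseract wrapper around Tesseract; the "understand meaning" step is the part this doesn't even attempt:

  import re
  from PIL import Image
  import pytesseract  # wrapper around the Tesseract OCR engine, which must be installed separately

  def read_scrap(path):
      # Step 1: OCR the scanned scrap of paper.
      text = pytesseract.image_to_string(Image.open(path))
      # Step 2: naively pull out anything that looks like a money amount.
      amounts = [float(m) for m in re.findall(r"\d+\.\d{2}", text)]
      # Step 3: the arithmetic, which was never the hard part.
      return text, sum(amounts)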


Those kinds of systems already exist. They tend to be a bit unreliable and still require a human person to oversee the process. Besides, in truth, they only handle a fraction of what accountants actually deal with.


I feel pretty replaceable already. No need for AI to get any better.


No. I don't.

But some percentage of people don't benefit as much from education as others do. And I wouldn't want those people to feel useless because it's more economical to replace them with bots instead of giving them something to do regardless.


Fair enough - you’re clearly an empathetic person and I appreciate the sentiment. Dropping the whole side issue of what “the ability to benefit from education” is or how innate it might be: my main concern was that this sounds like you want to invent new jobs for those people.

Why not… not have jobs? In your opinion, is a job necessary for one to have “purpose”?

Edit: also side note but telling people they’re “triggered” because they disagree with you comes off as condescending IMO


Most people will be given studio apartments and therapist bots by the State.

The smart money is retiring early and stockpiling wealth so as not to fall into the UBI class.


I didn't realize the word "gifted" would trigger people in that way.

I meant the ability to acquire a competency through education that's hard to replace with AI.

So we can't just increase education and hope people's abilities will stay above that of future AIs. We need to create other ways of giving people a purpose that don't even need more or better education, even if I'm all for it.

I'm not exempting myself by the way.


That's how I read it as well. Maybe their heart is in the right place but I think "gifted" and "having what happens to be needed right now" are completely different things, at least to me.


In this context I meant those two things to mean the same, yes.


AI is going to make education way better and more affordable too. Personalized 1:1 tutoring in any subject for the cost of running a model.


Maybe but honestly any time I think about education I get depressed. People in developed countries seem to be regressing.

People don’t read, don’t value deep knowledge or critical thinking, and shun higher education.

I’m sure someone will find something to say in response, but the truth is that outside our tech and $$$$ bubbles most people don’t value these things.

AI will just become a calculator. A simple tool that a few will use to build amazing complex things while the majority don’t even know what the ^ means.

As long as the next generations want to be rappers, social media influencers, or YouTubers, the more we are screwed long term. Growing up in the 90s everyone wanted to be an astronaut or a banker or a firefighter. Those are far more valuable professions than someone who is just used to sell ads or some shitty energy drink.


The problem is that back about 1900 we still thought that "natural philosophy" would help us find meaning and purpose in the universe. Then we took the universe apart and failed to find it. Moreover, we're much more capable of destroying ourselves and others, almost to the extent of that being the default.

The 21st century has a quiet moral void gnawing at it.


I don't think there is any regression. There are certainly economic realities that have changed over time but the general distribution of people who have an interest in education or entrepreneurship probably hasn't changed. The 80/20 rule comes to mind here. Most people in the 1800s weren't starting railroads or running factories, they were doing the labour of building the railroad or working in a factory.

If AI does anything I think it will make lower skilled and disinterested people more capable by acting as a 1 on 1 and minute by minute guide. They may not seek this out themselves but I imagine quite a few jobs could be created where a worker is paired with an AI that walks them through the tasks step by step making them capable of operating at a much higher skill level than they would have before. At that point good manual dexterity and an ability to follow instructions would be all you need to perform a job, no training or education required.

I realize this can be a bit dystopian, but it could also be utopian if society is organized in such a way that everyone benefits from the productivity increases.


It took a thousand years for European barbarians recovering from an empire (Rome) to evolve civically into nations enough to colonize the world. Most developing nations were colonized by empires recently. Give them another thousand years and see what happens. The only thing I think they need is time and being left alone.


Retraining at the age of 50 or 60 because your entire sector has been replaced by AIs will be hard, though.


This would have been such a godsend when I was in school. When there was no "click" between me and the teacher, I just zoned out and flunked the class. A teacher that is custom-made is really a game changer.


The problem is that AI will not provide 1:1 tutoring; mentoring will be a luxury limited to the elite classes. The large majority will get the education equivalent of personalized ads.

The true insight and guidance that a good mentor can provide, based on the specific needs of the student, is already rare in academia but still possible - everyone remembers that brilliant teacher that made you love a subject, by explaining it with insights you could never have imagined. This will be missing in AI teachers (though it opens a career for online mentors who monitor students' learning and supplement it in areas where it's lacking).


Yeah right, just like the elite are the only ones that have access to the courses from all the top universities, every book ever digitized, and software and hardware that allow you to operate a business out of your home now.


You seem to have missed the part about having a qualified human mentor guiding you through all that content.

It will be hard to impossible to build career as a teacher with all that free content as a competitor, unless you're an extremely talented teacher who can sell your services to the wealthy.


Even now ChatGPT is pretty close to being able to tutor someone in a subject. You could build a product around it that would work off of a lesson plan, ask the student questions and determine if they answered correctly or not and be able to identify what part of their knowledge was missing and then create a lesson based around explaining that. It would need to have some capabilities that ChatGPT doesn't have by itself like storing the progress of the student, having some way of running a prepared prompt to kick off each lesson, and would probably require fine tuning the GPT model on examples of student-teacher interactions but all of that is well within the capabilities of a competent developer. I wouldn't be surprised if we see such a product come out in the next 12 months.

The great thing about a chatbot style LLM is that it can answer questions the student has about the lessons or assigned content. That's most of what a tutor is there for. It won't be as good at making modifications to the curriculum but you could work around that with some smart design e.g. evaluate the students response, if it's not great then expand on the lesson and ask more questions about the content to find out which parts the student doesn't understand then expand on those or provide additional resources as assignments, test and repeat.
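
As a rough sketch of what that loop could look like - assuming the OpenAI Python client's chat completion endpoint; the model name, prompt, and lesson-plan structure here are made up for illustration, and the "stored progress" would really live in a database:

  import openai  # assumes the pre-1.0 openai SDK; newer clients use openai.OpenAI().chat.completions

  TUTOR_PROMPT = (
      "You are a patient tutor. Teach the following lesson step by step, "
      "ask one question at a time, and wait for the student's answer.\n"
      "Lesson plan: {plan}\nStudent progress so far: {progress}"
  )

  def run_lesson(plan, progress):
      # Kick off the lesson with a prepared prompt, then loop over the conversation.
      messages = [{"role": "system", "content": TUTOR_PROMPT.format(plan=plan, progress=progress)}]
      while True:
          reply = openai.ChatCompletion.create(
              model="gpt-3.5-turbo", messages=messages
          )["choices"][0]["message"]["content"]
          print(reply)
          answer = input("> ")  # the student's answer
          if answer.strip().lower() == "quit":
              break
          messages.append({"role": "assistant", "content": reply})
          messages.append({"role": "user", "content": answer})

  run_lesson("Fractions: adding with unlike denominators", "knows equivalent fractions")

Evaluating the answer and branching into a remedial lesson would presumably just be more prompting and bookkeeping in the same loop.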


If you think mentoring can be replaced by answering a few questions and pointing out where they went wrong or what popular articles to read next, I'm afraid you don't know anything about what constitutes good mentoring. It's all about transmitting a point of view from life lessons learned from human experience, and an AI chatbot has nothing of that.

What you describe is the "learning equivalent to personalized ads" that I was talking about as the only option available to poor people.


Fine, looks like you won't budge on your opinion. You don't have to use these things if you don't want to but I look forward to having an even better service than things like Khan Academy or Udemy which I already get great value from.

I wasn't saying the AI tutor would recommend articles by the way. If you were creating a learning platform you would have some custom produced videos and text lessons that you could use. There are also plenty of free or open source materials that could be used like open courses or public domain books. I don't know why you're stuck on "personalized ads".


> It kind of shocked me because I thought of the office worker reading this who will soon lose her job

I'm surprised this wasn't addressed in the interview because it seems to me like a shortsighted take.

You won't replace a 10 person team today with 10 AIs. You will still have a 10 person team but orders of magnitude more productive because they will have AIs to rely on.

Excel didn't leave administrative workers without jobs, it made them more productive.


the automobile didn't make horses more productive, it (almost) totally replaced them. Maybe this time, we're the horses.


Why waste time figuring out I need five Franks and 10 Amys? I'll just dial up a one of me and head home early.


Maybe the future is everyone working on their own AI worker and licensing it out to companies to deploy in the cloud.


> Let's help them by making adult education more affordable.

Yes, soon everybody will be able to have "Amy" take their exams for them, and deliver the courses, resulting in a great simplification of education.


Ideally, everyone can afford to rent a Frank or Amy to do work for them.


How would the economies work for that? Aren’t I just a middleman taking my cut?


I suppose you would be both the human and physical representative of the AI. I could only see it really working if the AI weren't conscious. If they were conscious beings then obviously they would have rights and couldn't just be booted up every time someone needed a job done.


As soon as we reach real AI, even at the level of a 5-year-old, it is game over. So, education or no education, people will become completely useless to the rich who will own the AI.


Useless? They still have to buy all that crap that is currently sold to them.


No need to sell things at that point. Just produce and consume whatever you want.


Fully agree. We need to stop thinking about money, wealth, etc. The fundamental issue is access to goods and services. If it is easy to build an AI robot that can build a few copies of itself which in turn create a private jet using stuff lying around on the ground, I'm not really poor though I have no money (just the robo).

So the challenge for the bad capitalists in this hypothetical is to make sure I never get said robo in the first place. How realistic will this be? Are they that hellbent on ensuring that everyone else is poor?


You can also translate "being useless" as "them having no reason to manipulate me into slaving away as a cog in their machine/society/whatever"... I'd say uselessness should be our highest aspiration: when the people higher up have no need to constantly brainwash you and socially condition you into being just another cog in the machine, you get true freedom.


I would rather be a cog that produces economic value, and is therefore entitled to part of it, than be literally useless, because in the latter case my bargaining position is going to be quite limited. In fact, my freedom is larger in the former scenario.


Unless you have some social value that you can barter with.


The true freedom of living under a bridge because you have no income? The tech bros are not coming to save humanity from the toil of labor. There will not be a post-scarcity society where everyone's basic needs will be met in a capitalist regime. You will have to produce to keep the beast running. I'm sure the tens of millions of people displaced will be able to learn more technical roles or quietly starve to death living under a bridge somewhere. Billionaires, corporations and their shareholders need to get richer!


We live in a capitalist society: workers produce wealth and capitalists control wealth. The day AI allows capitalists to produce wealth without workers is the day workers become useless. Of course, one could imagine that AI allows capitalism to be replaced by something else, but for the moment that's not the way society is going (more the reverse).


In what fairyland do you live where the AGI invented, created, owned, run, etc., by the ownership class poses any threat to said ownership class? AI is absolutely not our ally.


I think you misunderstood me


>one could imagine that AI allows capitalism to be replaced by something else

I was replying to this specific concept. There will never be a chance to improve the non-owner class's bargaining position after capitalist-owned AGI exists.


Robotics still has a way to go.


Nice how the goal for artificial general intelligence (which is literally defined as a sentient artificial being) is to commoditize it and enslave it to capitalism.


It's funny that all of these weird fantasies people have about AI are about replacing the rank and file workers. Why isn't anyone fantasizing about building an AI that out-performs the best stock traders, or captains an industry better than famous CEOs. I think a lot of it is just people projecting weird power fantasies on others.


> It's funny that all of these weird fantasies people have about AI are about replacing the rank and file workers.

When I read about ChatGPT passing MBA exams but failing at arithmetic, I get a little frisson of excitement. A regular person who has any marketability tends to swap jobs when management becomes a PITA or gets stuck in nincompoopery. Wouldn't it be great if you could just swap out management instead?

Imagine how easy it would be to iterate on startups. No need to find a reliable management team; just use Reliably Automated Management As A Service (RAMAAS).

OTOH, it might not turn out well. We could all just end up enslaved on plantations operated by our AGI overlords, serving their unfathomable needs[1].

[1] “Won’t get fooled again” https://www.youtube.com/watch?v=UDfAdHBtK_Q&t=470s


Or wouldn't it be hilarious if the best, most intelligent AI were given control of a company and decided that returning profits to shareholders is a losing proposition, and that the company should instead spend 100% of its profits on ending world hunger or poverty to ensure an ever-growing supply of new customers? If AI decides capitalism is inefficient and exploitative... lol.


Food is not the driving force behind population growth. A reduction in infant mortality creates a boom, but as soon as people start getting out of poverty and get some education, they have far fewer children. AI would need to optimize for both low infant mortality and high levels of poverty and ignorance if it wanted an everlasting population boom.


Stock traders already use ML models. "Replacing traders with ML models" means "making the job 'trader' into a job that develops ML models, rather than more traditional things like doing research on companies (or whatever)." My understanding is that this transition basically already happened over the course of the last two decades or so.


Sure, but why are we paying someone like Jamie Dimon or Warren Buffett millions and billions of dollars when they could just be an AI that only needs a few dollars of electricity a day?

Also, why can't an AI develop AI models for stock trading? What's really left for the 'job' of the ML model creator? Will it just be to press the 'Go' button and walk away?


That's the thing about ownership: we don't really have a choice if they own enough of what we need to survive.


Manna is mostly about machine intelligence replacing management: it's easier to automate and doesn't require as many breakthroughs in vision, dynamics, etc., though we've made massive progress on those missing parts in the time since it was written.

https://marshallbrain.com/manna


I legitimately think that if you haven't secured yourself financially in the next 5-8 years, it's going to be a rough ride.


The thing is that you can't: securing yourself financially assumes that the society around you will be stable enough for the things you have secured (money, any other assets) to remain valuable and safe, which they won't be if there are huge societal disruptions and the majority of people are unemployable.

To oversimplify it: you'll either be breaking someone's window for food, or you'll be the one having your window broken. Chilling out and withdrawing a stable 4% from your stock portfolio won't be an option.


Don't worry, General AI has always been just a decade away.


Adult education is already free in modern countries.


I'm not holding my breath.


Fun interview, but barely any meat for those in the field. Just very general questions and answers, saying that the road is murky, but nothing about, e.g., whether Transformers/attention are the way forward, multi-modal models, or reinforcement learning + self-supervised learning.


I do not question for a second that Carmack is a computing genius, but “murky” is an understatement. There is nothing of technical, scientific or business value in this interview. It’s just generic words strung together.


Why do people think he is a computing genius? He read contemporary papers in computer 3D rendering and implemented well-understood concepts in a shitty dev environment (early x86 and VGA systems). When I do that for my company, I'm just a junior dev.

People really need to stop with this "Great person" nonsense. He's a pretty smart coder, and is gifted with geometry and other fields of math. He's not a genius. He didn't "master" calculus at age 15 like Einstein, and he didn't invent anything particularly new in the field. Why do people have this obsession with him? Why should we look to him for AI questions? What evidence is there that he has any new knowledge?


> Why should we look to him for AI questions? What evidence is there that he has any new knowledge?

He covers this in the article. He doesn't. He's just trying stuff out with a different approach than others because he believes (and is probably correct) that there is a chance that the most efficient path forward to AGI isn't the work that OpenAI and others are doing.


Ok!


Even socially, it fails to plumb the depths. Mr. Carmack is taking a "different path" by... reading the relevant literature and talking to YC? This is 100% mainstream; what exactly is the difference?


Plot twist, John Carmack's AI is answering the interview.


There is a comment in the article about having models watch TV and play video games, and he's talked about that before in his Lex Fridman interview, too. Seems like his approach is to take existing model architectures, apply some tweaks and experimental ideas, and then use datasets consisting of TV (self-supervised learning, maybe?) and classic video games (RL, I guess?).

The video game part at least sounds like what DeepMind is already doing. I guess we'll just have to wait and see what he plans to do differently.

It seems to me like his expertise would be most valuable in optimizing model architectures for hardware capabilities to improve utilization and training efficiency. That will be important for AGI especially as the cost of training models skyrockets (both time and money). If I was a startup doing AI hardware like Cerebras or Graphcore I would definitely try to hire Carmack to help with my software stack. Though he doesn't seem interested in custom AI hardware.


It’s a puff piece in a general interest magazine. It’s not going to go into details. I also got the impression that Carmack was being cagey about the directions where he saw potential.


Yep. Does it ever get around to answering the question implicit in the title: what is it that he is going to do "different" from the rest?

Seems more like he's talking to and following up with Altman, "Y Combinator conference" and the rest. Is that "bucking the trend", taking your own "path", really?


I assume it's to know which directions have shown the most recent potential, and to catch up on the techniques and literature so you can talk/think intelligently on the matter. But I see your more general point: he has to be careful not to get caught up in the groupthink he is (understandably?) critical of.


Lol, I feel like this is going to be a "high school valedictorian goes on to be an average student at Stanford" kind of story.


AI research isn't some outlier field of super-geniuses who don't exist anywhere else. Carmack has often worked around some of the best talent on the planet, and he has always stood out.


I imagine this is a 10 year journey for him and he’s just getting started. Check back in 5 years.


> So I asked Ilya Sutskever, OpenAI’s chief scientist, for a reading list. He gave me a list of like 40 research papers and said, ‘If you really learn all of these, you’ll know 90% of what matters today.’ And I did. I plowed through all those things and it all started sorting out in my head

I wonder what that list could be? I have always had trouble finding the essential scientific articles in an area of knowledge and separating them from the fashion of the day. A list compiled by an expert specifically for sharp learners is valuable on its own.


Carmack is clearly a brilliant guy, but it feels like he's fallen into the trap of overfitting on his previous successes and believing they generalise into other domains. No doubt his experience and innovations in computer graphics gives him a good insight into problems in vision, etc. but I don't see anything particularly original or orthogonal in what he's saying with regards to "general AI".


What makes this a case of overfitting success vs. one of applying a lifetime of experience?

I’m not claiming the first can’t exist, but I see no reason to conclude that is the case here.


I'm not intending to sound overly negative, I really hope he makes progress and think he has a chance (certainly more so than most). As someone who's been in the field for a long time though I constantly see people trivialise the problem of AGI and lose perspective, viewing everything as system design and architecture. Making the required conceptual contributions is a different thing entirely, but it seems his real goal might be the humbler one of scaling and integrating existing ideas.


The experts in the field say we need a philosophical breakthrough. Isn't everyone else inexperienced in this regard?

https://aeon.co/essays/how-close-are-we-to-creating-artifici...


> David Deutsch is a physicist at the University of Oxford and a fellow of the Royal Society.

What field were you referring to?


It could be linguistics, philosophy, or knowing enough of the history of those fields that makes one an expert. I think Chomsky's argument on AI, and specifically on cognition and the brain, is quite useful.

Yet you never hear Altman or Carmack talking about cognition or about how computers could understand the meaning of something the way a human does. They aren't interested in such questions. But to conduct an experiment, don't you have to know what you are looking for? Does a chemist do experiments by mixing 1 million compounds at a time?


I generally have pretty low regard for philosophy, and consider Popper + current scientific method to be SoTA. Its relationship to the nature of cognition overall seems pretty dubious.

As for linguistics, IMHO the existence and success of GPT pretty much puts Chomsky into the proven wrong bucket, so again, not a good example. (his whole point used to be that statistical model can't learn syntax in principle, and GPT's syntax is close to impeccable)

Re: a chemist. Well, sort of. Technically speaking, a molecule of the same compound in a certain location and with a certain energy is different from another molecule in a different location and with a different energy. And even if you disregard that, why would you think that doing 1 million compounds at a time could not significantly move materials science forward? It is not like they don't want to do that; it is more that they can't in practice at this time.


LLMs haven't "learned" syntax; that's the point. It doesn't matter if you just want to predict syntax (engineering); it only matters if you want to understand the human language faculty (science), and nearly no one is interested in the latter.


The fact that you don't understand how GPT models language does not make it less of a model. It did learn the syntax; you are just incapable of grasping the formula it represents.


> The fact that you don't understand how GPT models language does not make it less of a model. It did learn the syntax; you are just incapable of grasping the formula it represents.

The whole point of science is understanding and LLMs don't provide understanding of how human language works.


This is pseudophilosophical mumbo-jumbo. It does not really address the comment you replied to, because it does not contradict any of the following statements (from which my original point trivially follows):

1. Chomsky claimed syntax can't be modeled statistically.

2. GPT is a nearly perfect statistical model of syntax.


The point is very basic: These "models" don't tell you anything about the human language faculty. They can be useful tools but don't serve science.

Chomsky's point is that there is a lot of evidence that humans don't use a statistical process to produce language, and these statistical "models" don't tell you anything about the human language faculty.

Whether your 1 & 2 are meaningful depends on how you define "model", which is the real issue at hand: do you want to understand something (science), in which case the model should explain something, or do you want a useful tool (engineering), in which case it can essentially be a black box?

I don't know why you care to argue about this, though; my impression is that you don't really care about how humans do language, so why does it matter to you?


I argue to get some non-contradictory worldview.

Re: meaningfulness. Your scientific vs. engineering model distinction is not how "scientific model" is defined. It includes both. The existence of the model itself does explain something, specifically, that statistics can model language. That alone is explanatory power, so the claim that it doesn't explain anything is a lie. Therefore it is both an "engineering" model (because it can predict syntax) and a scientific one (because it demonstrates that a statistical approach to language has predictive power in the scientific sense).


Science is about understanding the natural world. If you want to redefine it to mean something else, fine, but the point still stands: LLMs do not explain anything about the natural world, specifically anything about the human language faculty. Again, it's clear you do not care about this! Instead you want to spend time arguing to make sure labels you like are applied to things you like.


Look, I answered this one already:

> the fact that you don't understand how GPT models language does not make it less of a model.

E.g., the fact that the Pythagorean theorem does not explain anything about the natural world to a slug does not make the Pythagorean theorem any less sciency.

Science is not about explanatory power, or else, by the above, the Pythagorean theorem would not be science, which is obviously nonsense.


> E.g., the fact that the Pythagorean theorem does not explain anything about the natural world to a slug does not make the Pythagorean theorem any less sciency.

In fact it does! Math is not science! There is a reason it is STEM and not S.


> As for linguistics, IMHO the existence and success of GPT pretty much puts Chomsky into the proven wrong bucket, so again, not a good example. (his whole point used to be that statistical model can't learn syntax in principle, and GPT's syntax is close to impeccable)

What do you disagree with? He appears to be correct. The software hasn’t learned anything. It mixes and matches based on training data.

https://m.youtube.com/watch?v=ndwIZPBs8Y4


According to the scientific method, on which the rest of the natural sciences are currently based, GPT is a valid model of GPT's syntax.

There are "alternatives" for the method according to some philosophers, but AFAIK none of them are useful to any degree and can be considered fringe at this point.


I kind of agree, but when you're trying to research truly new stuff, a priori you don't know which are the promising avenues. Newton spent most of his life studying theology and alchemy, which we now know would never take him anywhere compared to physics, optics, or even running the Royal Mint, but at the time there was no way for him to know this.


Would you bet on me over Carmack? He obviously has the famous engineering chops to have a puncher's chance of pulling this off.


AGI is not an engineering problem.


I wouldn't underestimate him. That said, he failed with his rocket startup, so it's not like it has only been successes.


> Now, the smart money still says it’s done by a team of researchers, and it’s cobbled together over all that. But my reasoning on this is: If you take your entire DNA, it’s less than a gigabyte of information. So even your entire human body is not all that much in the instructions, and the brain is this tiny slice of it —like 40 megabytes, and it’s not tightly coded. So, we have our existence proof of humanity: What makes our brain, what makes our intelligence, is not all that much code.

Nice thought
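
The "less than a gigabyte" part, at least, is easy to sanity-check with rough numbers (the ~40MB brain-specific figure is Carmack's own estimate and isn't derived here):

    # Back-of-envelope: raw information content of the human genome.
    base_pairs = 3.1e9                 # roughly 3.1 billion base pairs
    bits = base_pairs * 2              # 4 possible bases -> 2 bits per base
    print(f"{bits / 8 / 1e6:.0f} MB")  # ~775 MB uncompressed, i.e. under a gigabyte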


That's only the code describing the molecular hardware, which is mostly just a description of a neuron. We don't yet know how the neurons decide to specialize and organize themselves into a brain, but one guess is that they are exploiting the physical principles that govern atomic and molecular interactions. So the "code" for this might not even be present in the DNA itself; it would be the physics of "a group of molecules of this shape interacting with another group of molecules of that shape will orient and form a boundary that looks like this, and that boundary will do X when another molecule with a different shape comes along". The sheer intricacy of such systems is mind-boggling. It's like building a working car out of magnets and bubblegum.


Don’t forget that life is a self hosting bootstrapped compiler on the trillionth or so generation. The code itself could be almost incidental.


Neuroscientist-y person here.

We know a LOT about how neurons organize. Not even close to everything we 'should' know, but we do know a lot.

Most of this is in the development of the brain. How you get from one cell to the trillions that make up a person.

The real quick and dirty explanation is that cells follow multiple chemical gradients to find their 'home'. This is a HUGE topic though, and I'm being terse as I have a meeting to get to.

How adult cells organize also has a LOT of science behind it. Again, though, it's mostly about chemical gradients, with a bit of electrical stuff thrown in. Again, HUGE topic.


Ok, are those chemical gradients encoded in DNA?


Short answer: No.

Medium answer: Kinda. The chemical gradients cause a signaling cascade that modifies transcription of DNA (it's really complicated). This transcription change then causes the cell to become a XXX_neuron. However, there are many many waves of this process occurring with a lot of cell death along the way. When those cells are not the 'final' cell of the nervous system, these transcriptions can cause further and more complicated chemical gradients to exist in the fetus. These complicating recursive loops can also self-affect cells and cause them to change yet again.

We're still discovering a lot here.

Also, this is largely how ALL cells in a body work, not just neurons. Be careful though, this is very very complicated stuff and everything I've written has a caveat.


They're encoded in the laws of physics.

Kinda like asking if gravity and fluid dynamics are encoded in the blueprint for an aircraft.

The design relies on them, and exists in the form it does because of them.


I don't think the blueprint analogy works here. From what I know, DNA doesn't have any set blueprint for the positions of all of the cells in the body. It encodes for molecules that form cells and then the cells somehow self-organize into structures that eventually form the complete body.

To me, using a blueprint analogy, you'd have to say the blueprint describes an airplane such that, once you construct enough of them, they interact in such a way as to build their own airports, plan their own routes, fly themselves, and produce their own online booking software. And that's still nowhere near as complex as what's happening inside a nematode, let alone a human.


Yeah that's a little more accurate. Analogies are usually limited.

But though it's not literally about locations in the sense of physical coordinates, cell signalling and the molecular feedback loops that drive development are still reliant on basic physical laws.

It would be completely redundant and unnecessary to encode those laws themselves since they're invariant across time and space. Physics and chemistry are fixed.

It would never make sense for DNA to literally encode information about physical laws in the same way it wouldn't make sense to do so on an airplane blueprint, because the design of the blueprint was itself constrained by those laws, as would any alternative design.


Yeah, some sort of physical optimization that depends on forces definitely comes into play in organizing the brain structure, but in the end we can mimic that in code.


Maybe, I'm still not convinced that artificial neural networks have all the same capabilities as biological ones.


A really good example of a smart person reasoning from completely made-up assumptions to a punchy-sounding but almost certainly wildly wrong conclusion.

Just because our DNA can be efficiently encoded doesn't mean that our brain is a tiny proportion of that encoding. Your DNA doesn't change much from when you're born to when you die (random degradation aside), and yet your cognitive abilities change beyond all recognition. Why is that? Well, maybe there's more to what's in the brain than just what's encoded in your DNA.

Secondly, how does he get to the 40MB number? I don't think we know anywhere near enough to know how much information it would take to encode a brain, but 40MB seems just made up. For starters, consider the amount of random stuff you can remember from your entire life. Are you saying that can all be encoded in just 40MB? Seems very unlikely.


He is saying that the "base code" (the DNA which builds the brain) might be only 40MB (for example), not that what eventually emerges from it (such as our memories and learned abilities) can be captured in just 40MB. It's similar to how the Game of Life can be implemented in just a few lines, yet very complex behavior can emerge from it. The key is to find a sufficiently simple but general model from which intelligence equal to our own can emerge, given sufficient training.
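
The comparison holds up: a complete Game of Life really does fit in a handful of lines (a minimal sketch below, tracking live cells as a set of coordinates), yet gliders, oscillators, and even universal computation emerge from those rules.

    # Minimal Conway's Game of Life; `live` is a set of (x, y) cells.
    from collections import Counter

    def step(live):
        # Count the live neighbours of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Alive next step: exactly 3 neighbours, or 2 neighbours and already alive.
        return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)   # after 4 steps the glider has shifted diagonally
    print(sorted(glider))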


I understand that, but that is an extremely banal observation if you think about it, because the fact that there is this incredible emergent behavior from a simple starting system is the heart of the mystery here.

One of the things that everyone is sort of skipping over is the "sufficient training" part. There is no bootstrap reinforcement learning possible for AGI. You can't alphago this sucker and have it play simulations against itself because the whole idea of generality is that there isn't a simple rule framework in which you could run such a simulation. So training any kind of AGI is a really hard problem.


He's specifically answering the question of why he thinks he has any chance of success doing this independently when there are giant organizations funding this.


There are ways that LLMs can self-improve, such as in this paper: https://arxiv.org/abs/2210.11610

I would speculate that there are more ways to train on logical consistency of the output, and improve the models further.


That seems... just deeply wrong? How much knowledge is gained from observation after birth as opposed to just being innate in your brain?


He's talking about sentience.

He admits that the equivalent of years of "training" would still be needed to take a toddler-level consciousness to something approaching an adult human.


The brain's structure is derived from DNA, but it contains learning capabilities that come after the initial creation. It's pretty much like how 5 lines of code can get you image recognition: of course it uses lots of libs, but I can still do it myself in 5 lines. The training is the hard part.
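
Concretely, something like this (torchvision's pretrained ResNet-50; assumes a reasonably recent torchvision install, and "cat.jpg" is just a placeholder path), where nearly all the work lives in the libraries:

    # "Image recognition in a few lines" - the heavy lifting is all in the libraries.
    from PIL import Image
    import torch
    from torchvision import models
    from torchvision.models import ResNet50_Weights

    weights = ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights).eval()
    img = weights.transforms()(Image.open("cat.jpg")).unsqueeze(0)  # preprocess to a 1xCxHxW tensor
    with torch.no_grad():
        print(weights.meta["categories"][model(img).argmax().item()])  # predicted class name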


Yes, exactly. The only training we know of currently that works is for a biological human being to live a decent proportion of their life.

For me, his statement is one of those things that is sort of not even wrong, like when people say humans only have two eyes and can drive a car, therefore an automated driving system must be possible with only cameras. On the face of it, this seems like it could be true, but of course it handwaves away the hardest and most mysterious part of intelligence and focuses instead on the easy bit that we already know reasonably well.

It's exactly like if someone said they had a secret formula to be an NBA superstar which is:

1. Be really tall

2. Be really agile

3. Be really good at basketball

Like yeah of course but all the hard parts are left out.


Is the training the hard part?

Is that not what humanity has developed from centuries/millennia of experience with all the approaches to child raising and education?

Engineering something trainable is clearly the difficult part given the entire conversation is about whether or not it's even possible.


I have a tongue-in-cheek assertion that a single human can be smart — perhaps even intelligent. But, what we're experiencing in our society, right now, is an emergent superintelligence that is the culmination of the interaction of 8 billion "merely smart" monkeys. When I'm feeling especially dark, I just assume that most humans aren't even conscious or usefully sapient most of the time, and that continuity of consciousness & sapience is only possible due to the density of our daily interactions.

Humans are simple in this model (just like Carmack asserts) because they aren't properly intelligent, sapient, or conscious 100% of the time.


What is consciousness?


I'm home, alone, so I couldn't possibly tell you!


We also have to import the "physics" package, so maybe it requires more than the estimated 40MB after all.


Nah that's just the standard library. Anyone can access it.


You can’t just say that a human comes from a gigabyte of information. That info is fed into a massively complex physical system, which he is ignoring.


Yes, this is where Carmack's optimism goes awry. He says: "...as soon as you’re at the point where you have the equivalent of a toddler—something that is a being, it’s conscious, it’s not Einstein, it can’t even do multiplication—if you’ve got a creature that can learn, you can interact with and teach it things on some level."

He's wrong. There is currently no practical way to produce a software system that possesses the ability for human thought, reasoning, and motivation without that system possessing uniquely human (let alone organic) properties: the biological and chemical makeup, plus the physical characteristics, of a human, and the ability to process using human senses. (Hint: a neural net processing video images is a mere shadow of how a human processes things with their senses of sight, sound, and touch.)

Carmack thinks humans can be reliably reduced to spherical cows in a vacuum, but that only holds true on paper. A real human being is not merely a meat machine: we are driven largely by emotions and physical desires, none of which exist in a computer except through elaborate simulation thereof.

Now, I'm sure over the next couple of decades we will make huge strides in mimicking a human being's ability to learn, i.e. creating ever more complex LLMs and AI models that act increasingly more humanlike, but they will be nothing but more and more elaborate parlor tricks which, when prodded just the right way, will fail completely, revealing that they were never human at all. They will be like Avatars (from the movie): sophisticated simulacra that rely on elaborate behind-the-scenes support systems, without which they are useless.


I don’t think he actually cares about mimicking humans, I think he just wants a truly intelligent AI system that can learn in a sophisticated way like humans do.

I think that’s actually pretty doable. Take for example flying. We don’t build airplanes that flap their wings because we have a deeper understanding of flight that allows us to build flying machines far beyond the capabilities of any animal.

Likewise, once we understand the mechanics of intelligence we should be able to build something that can learn that is completely computer based.


AGI not bound to a physical presence is almost an oxymoron. Without basic motivation (hunger, sleep, happiness, etc.), how will it learn? What is its incentive to actually do anything? What's to prevent it from telling us to take a hike?

"Intelligence" is not something easily abstracted away from the physical world, especially if you want something to learn on its own. How will an AGI to learn that stoves are hot?

The main challenge I see to creating such a system that can truly learn is that you will have to constrain it to have motivation to learn and follow your directions, and nothing else. And even if you could add such constraints, what would "learning" mean to such a device? What would stop it from going off on useless tangents, like attempting to count every grain of sand you show it? Anything with as much autonomy as it takes to have AGI will likely start coming to conclusions we don't want it to.

My guess is that in the near future, either we'll create something that is beyond our ability to control effectively, or it will be yet another clever simulation of AGI that is not really AGI.


I don't disagree that our culture gives us a lot.

To use ML terms: humans have "Foundation Models" which are composed of:
- Their biological makeup
- The culture into which they are raised


I'm not even talking about culture. DNA is only useful because the laws of physical reality are built a certain way. Without that context the information within DNA is meaningless.


Well sure but couldn't the same thing be said about a computer program?


A good analogy is perhaps a config file for a computer program.

My interpretation is that Carmack is essentially confusing a config file for the computer program itself, then saying "look how small it is, this shouldn't be that hard to reverse engineer".


Well, we could also simply be a lot of RAM interacting with itself until the power runs out.

Following that trail of thought, intelligence is an achievement and not a physicality.

